Smart Surveillance System

ECE5725 Final Project
A Project By Kuang Sheng & Wenjun Ma.



Introduction

Buildings are among the most crucial pieces of infrastructure supporting people's daily life and work, and this project aims to enhance the occupant experience in terms of both security and the interior environment. Video and CCTV systems have been used for security applications for decades; they provide remote surveillance for security operators and keep a video record of the monitored spaces.

The system employs facial recognition to control access to the building. Authorized individuals are recognized immediately and access is granted seamlessly. Upon entry, the system displays relevant indoor air quality (IAQ) and ambient data at the gate, so users can adjust the office environment from their mobile phones before heading to their rooms and arrive at an already comfortable workspace. In addition, an admin system lets building administrators manage access rights: for example, an admin can add or remove authorization for a particular person. Security staff can also steer the surveillance camera to cover a wider area.



Project Objective:

  • Facial Recognition: Identify individuals by comparing captured images against a local dataset of authorized users, ensuring access is granted only to recognized individuals.
  • Administrative Functions: Provide administrative capabilities through a PiTFT display interface, allowing an admin to add and delete users, and display user information.
  • Real-time Feedback: Both the facial recognition for access control and the add user function will display real-time video feedback on the PiTFT, enhancing user interaction and system responsiveness.
  • Wireless Communication: Sensor data and users' commands should be transmitted wirelessly, seamlessly, and simultaneously.

Design


The system can be divided into two main parts: the Main Gate and the Office. This section outlines the design and development of the two parts, focusing on integrating hardware and software to meet the functional requirements.

Main Gate Part:

Facial Recognition System

The Main Gate part features a critical security component: a real-time facial recognition system developed with OpenCV. Here is how it functions:

To enhance user interaction and provide immediate feedback, we implemented a display on the 2.8-inch PiTFT. It shows the facial recognition process in real time and also gives visual feedback to people who have been authorized by an administrator to register their faces in the system, improving transparency and convenience for users.

Recognition output: recognized users are labeled with their names, while unrecognized faces are labeled 'Unknown'.
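
A condensed sketch of the recognition step is shown below; it follows the same approach as the full program in the Code Appendix (Haar cascade detection followed by LBPH prediction), and the threshold of 74 matches the value used there. The identify() helper name is only for illustration.

# Sketch: detect faces with the Haar cascade, then identify them with the trained LBPH model
import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
model = cv2.face.LBPHFaceRecognizer_create()
# model.train(images, labels) is assumed to have been run on the stored face dataset

def identify(frame, names, threshold=74):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (130, 100))
        label, distance = model.predict(face)
        if distance < threshold:      # lower LBPH distance means a closer match
            return names[label]       # recognized: grant access
    return None                       # no authorized face in this frame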

Admin System  

The administrative functionality was initially conceptualized as a console-based application, requiring physical keyboard input for operations such as adding or removing authorized users. To improve usability, especially for administrative staff without technical expertise, we transitioned to a more intuitive solution: a touch-driven menu on the PiTFT, where the admin taps on-screen buttons to add a user, list registered users, or delete a user.
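
A minimal sketch of the touch handling is shown below; the button labels and the 120x60 button geometry mirror admin.py in the Code Appendix, while dispatch() and the actions mapping are illustrative names only.

# Sketch: map a PiTFT touch release to one of the admin actions
import pygame

buttons = {
    "Add User": (35, 40),
    "Show Users": (175, 40),
    "Delete User": (35, 140),
    "Exit": (175, 140),
}

def dispatch(touch_pos, actions):
    # actions maps a button label to its handler, e.g. {"Add User": add_user, ...}
    for label, (bx, by) in buttons.items():
        if pygame.Rect(bx, by, 120, 60).collidepoint(touch_pos):
            actions[label]()
            return label
    return None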


Challenges and Solutions

Main Gate Part:

During the initial phase of our project, we encountered a significant challenge in displaying real-time video from the facial detection and recognition processes directly on the PiTFT. Our initial approach was to direct the OpenCV window output to the PiTFT. Unfortunately, this method failed because the console mode set on the PiTFT was not compatible with OpenCV's `cv2.imshow()` function, which could not render windows directly onto the PiTFT.

Upon investigating this issue, we discovered that the incompatibility stemmed from the specific display mode configuration of the PiTFT, which did not support direct rendering from OpenCV. To address this, we attempted to reconfigure the PiTFT to operate as an independent display for the Raspberry Pi. This approach, however, led to a system crash and corrupted our SD card, forcing us to restore the system from a backup.

After reassessing our options, we decided to abandon the idea of using the PiTFT as a standalone display for OpenCV output. Instead, we adopted a more compatible solution using Pygame: converting the OpenCV image frames into an RGB format compatible with Pygame allowed us to render the video feed on the PiTFT effectively. This method proved successful and let us integrate real-time facial recognition feedback directly on the PiTFT without further system instability.
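
The key step is just a colour-space conversion plus a buffer copy; a minimal helper in the spirit of the appendix code is sketched below (blit_frame is an illustrative name, not a function from our final program).

# Sketch: render an OpenCV BGR frame on the PiTFT through Pygame
import cv2
import pygame

def blit_frame(screen, frame_bgr):
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)   # OpenCV stores frames as BGR
    surface = pygame.image.frombuffer(rgb.tobytes(), rgb.shape[1::-1], 'RGB')
    screen.blit(surface, (0, 0))
    pygame.display.flip()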

Office Part:

Initially, we tried to develop the Wi-Fi communication on the Adafruit QT Py ESP32-S3 module because of its small size and very low power consumption. According to the product overview it supports both Wi-Fi and Bluetooth Low Energy (BLE), which made it a promising choice for prototyping the smart surveillance system. However, as the project progressed, we found that although the module can join a WPA3 Wi-Fi network, its BLE support is still under development and has limitations: it cannot advertise anonymously or support pairing and bonding, which would make it very difficult to build the link between users' mobile phones and the surveillance system. We therefore switched to the Adafruit HUZZAH32 ESP32 Feather board. It is a direct upgrade of the ESP8266, supports both Wi-Fi and Bluetooth, and provides plenty of GPIOs for integrating additional peripherals in the future. A Python program was initially developed to connect the device automatically to the Cornell campus network 'RedRover'; however, the device could simply be pre-registered with CIT by providing its MAC address, which saved considerable time later.

In addition, the flash space allotted for the application on the ESP32 was not large enough to accommodate both the Wi-Fi code and the BLE code. Despite efforts to make the code more memory efficient, the combined program still did not fit, so we investigated the memory layout of the ESP32 board instead. A portion of the flash is used by the bootloader and the partition table, which leaves only a limited amount of space for application code. We solved the problem by re-allocating space in the partition table, after which the ESP32 board could run both Wi-Fi and Bluetooth at the same time within a single program.
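
For illustration, a custom partitions.csv along the lines of the Arduino 'Huge APP' scheme drops the second OTA slot to give the application roughly 3 MB on a 4 MB flash part; the exact offsets and sizes below are an assumption (they vary with the core version), not a copy of our final table.

# Name,   Type, SubType, Offset,   Size,     Flags
nvs,      data, nvs,     0x9000,   0x5000,
otadata,  data, ota,     0xe000,   0x2000,
app0,     app,  ota_0,   0x10000,  0x300000,
spiffs,   data, spiffs,  0x310000, 0xF0000,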

According to the original plan, a joystick would let security staff control the pan and tilt rotation of the Pi Camera. After further testing and investigation, however, we found that the provided joystick is an analog input device, and the RPi cannot read analog signals directly without an ADC. Lacking an ADC, we initially used the four buttons on the piTFT to control the pan and tilt servos: two buttons set the left/right rotation of the pan servo and the other two set the up/down rotation of the tilt servo, with both servos driven by PWM signals from the GPIOs. As development went on, though, the buttons had to be reserved for switching between programs and exiting, and it makes little sense for security staff to steer the camera by pressing buttons on the gate device. The camera control was therefore moved to the wireless path, which is also more realistic in practice.
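
Each command nudges the target angle by 10 degrees and converts it to a PWM duty cycle. The sketch below reproduces the mapping used in servo.py: at 50 Hz the period is 20 ms, so 2% duty is roughly a 0.4 ms pulse (0 degrees) and 12% roughly a 2.4 ms pulse (180 degrees).

# Sketch: drive one pan/tilt servo from an RPi GPIO pin, as in servo.py
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(24, GPIO.OUT)       # pan servo signal pin (GPIO 24 in servo.py)
pan = GPIO.PWM(24, 50)         # 50 Hz is the usual refresh rate for hobby servos
pan.start(0)

def set_angle(pwm, angle):
    pwm.ChangeDutyCycle(angle / 18 + 2)   # 0 deg -> 2% duty, 180 deg -> 12% duty
    time.sleep(0.1)                        # give the servo time to reach the position
    pwm.ChangeDutyCycle(0)                 # stop driving to reduce jitter

set_angle(pan, 90)             # center the servo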


Testing Procedures and Verification

To ensure that our facial recognition system and the associated administrative functions were performing as intended, we conducted a series of tests that were documented and demonstrated in our project video. Here is a detailed account of the testing phases:

1. Face Addition and Recognition Accuracy

Objective: Verify the system's ability to accurately recognize and differentiate between registered users.

Procedure: Using the admin function's 'Add User' feature in admin.py, we initially added two distinct user profiles—each corresponding to different team members. The process involved capturing facial data through the Pi Camera, which was then processed and stored locally for recognition purposes.

Test: After adding the profiles, we tested the facial recognition accuracy by presenting ourselves in front of the Pi Camera. The system was expected to recognize each face and match it to the correct user profile.

Result: The facial recognition system successfully identified and differentiated between the two faces, confirming the effectiveness of the capture and recognition algorithms.

2. User Deletion and Access Control

Objective: Assess the system's response to profile deletions and ensure that access rights are updated accordingly.

Procedure: We utilized the 'Delete User' function to remove one of the previously added user profiles. This test was designed to confirm that the deletion process not only removes the user data but also correctly updates the system's access controls.

Test: We deleted a user profile and then had the deleted user attempt to gain access via the facial recognition system.

Result: The system correctly identified the deleted user and denied access, demonstrating that the user deletion process effectively updates the facial recognition system's access parameters.

3. Sensor Integration and Environmental Response

Objective: Test the integration and responsiveness of additional sensors within the system, such as temperature and CO2 sensors.

Procedure: With one user profile still active and verified by the recognition system, we conducted tests to observe sensor responsiveness under different conditions. This included manually covering the temperature sensor to simulate a change in environmental conditions and breathing onto the CO2 sensor to detect changes in carbon dioxide levels.

Test: Monitor sensor readings for changes in response to the manipulated conditions.

Result: The sensors responded appropriately to the environmental changes: temperature readings fluctuated when the sensor was covered, and CO2 levels rose with direct breath exposure. This confirmed the functionality and integration of the sensors with the main system.


Result

After testing and demonstration, our Smart Surveillance System met its objectives and demonstrated its functionality and automation for security and environmental monitoring in office buildings. Both devices, one placed at the main gate and one in the office room, were prototyped successfully, and the concept of the smart surveillance system was well proven. Because buildings are multidisciplinary systems that typically involve many electronic subsystems and sensors to collect data and realize automation, this prototype can help break down the barriers to understanding IoT applications in buildings from the perspective of built-environment professionals and civil engineers. The workflow can be further investigated and potentially adopted by building experts.



Work Distribution


Project group picture


Wenjun Ma

wjm001012@gmail.com

Facial Recognition

Admin System


Kuang Sheng

ks2352@cornell.edu

Wireless Communications and Controls,

Data Collecting and Processing

System Integration


Parts List

Total: $153.15


References

PiCamera Document
ESP32-Feather Datasheet
R-Pi GPIO Document
SGP30 Datasheet
DHT11 Datasheet

Code Appendix


# Facial Recognition
size = 4
import cv2, numpy, os
from pygame.locals import *
import RPi.GPIO as GPIO
import pygame,pigame
import time 
import sys

os.putenv('SDL_VIDEODRIVER', 'fbcon') # Display on piTFT
os.putenv('SDL_FBDEV', '/dev/fb1')
os.putenv('SDL_MOUSEDRV','dummy')
os.putenv('SDL_MOUSEDEV','/dev/null')
os.putenv('DISPLAY','')
GPIO.setmode(GPIO.BCM)
GPIO.setup(27,GPIO.IN)


haar_file = '/home/pi/Project/facialversion1/haarcascade_frontalface_default.xml'

datasets = '/home/pi/Project/facialversion1/Data'

print('Training classifier...')
# Create a list of images and a list of corresponding names along with a unique id
(images, labels, names, id) = ([], [], {}, 0)
for (subdirs, dirs, files) in os.walk(datasets):
    for subdir in dirs:
	# the person's name is the name of the sub_dataset created using the create_data.py file
        names[id] = subdir
        subjectpath = os.path.join(datasets, subdir)
        for filename in os.listdir(subjectpath):
            path = subjectpath + '/' + filename
            label = id
            images.append(cv2.imread(path, 0))
            labels.append(int(label))
        id += 1
(width, height) = (130, 100)


def rounded_rectangle(img, pt1, pt2, color, thickness, r, d):
    x1,y1 = pt1
    x2,y2 = pt2
 
    # Top left
    cv2.line(img, (x1 + r, y1), (x1 + r + d, y1), color, thickness)
    cv2.line(img, (x1, y1 + r), (x1, y1 + r + d), color, thickness)
    cv2.ellipse(img, (x1 + r, y1 + r), (r, r), 180, 0, 90, color, thickness)
 
    # Top right
    cv2.line(img, (x2 - r, y1), (x2 - r - d, y1), color, thickness)
    cv2.line(img, (x2, y1 + r), (x2, y1 + r + d), color, thickness)
    cv2.ellipse(img, (x2 - r, y1 + r), (r, r), 270, 0, 90, color, thickness)
 
    # Bottom left
    cv2.line(img, (x1 + r, y2), (x1 + r + d, y2), color, thickness)
    cv2.line(img, (x1, y2 - r), (x1, y2 - r - d), color, thickness)
    cv2.ellipse(img, (x1 + r, y2 - r), (r, r), 90, 0, 90, color, thickness)
 
    # Bottom right
    cv2.line(img, (x2 - r, y2), (x2 - r - d, y2), color, thickness)
    cv2.line(img, (x2, y2 - r), (x2, y2 - r - d), color, thickness)
    cv2.ellipse(img, (x2 - r, y2 - r), (r, r), 0, 0, 90, color, thickness)


# Initialize the Pygame 
pygame.init()
pygame.mouse.set_visible(False)
# Basic configuration for the game
size = (320, 240)  # piTFT resolution; keep (width, height) above at (130, 100) to match the training images
pitft = pigame.PiTft()

screen = pygame.display.set_mode(size)
FPS = 60
clock = pygame.time.Clock()
WHITE = 255,255,255
BLACK = 0,0,0
RED = 255,0,0
GREEN = 0,255,0
pygame.display.update()

unknown_start_time = None


# Create a numpy array from the lists above
(images, labels) = [numpy.array(lists) for lists in [images, labels]]

# OpenCV trains a model from the images using the Local Binary Patterns algorithm
model = cv2.face.LBPHFaceRecognizer_create()
# train the LBP algorithm on the images and labels we provided above
model.train(images, labels)

face_cascade = cv2.CascadeClassifier(haar_file)
webcam = cv2.VideoCapture(0)
print('Classifier trained!')
print('Attempting to recognize faces...')
while True:
    (_, im) = webcam.read()
    im = cv2.flip(im, -1)
    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    # detect faces using the haar_cacade file
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x,y,w,h) in faces:
        # colour = bgr format
	# draw a rectangle around the face and resizing/ grayscaling it
	# uses the same method as in the create_data.py file
        # cv2.rectangle(im,(x,y),(x + w,y + h),(0, 255, 255),2)
        face = gray[y:y + h, x:x + w]
        face_resize = cv2.resize(face, (width, height))
        # try to recognize the face(s) using the resized faces we made above
        prediction = model.predict(face_resize)
        rounded_rectangle(im, (x, y), (x + w, y + h), (0, 255, 255), 2, 15, 30)
        # if face is recognized, display the corresponding name
        if prediction[1] < 74:
            cv2.putText(im,'%s' % (names[prediction[0]].strip()),(x + 5, (y + 25) + h), cv2.FONT_HERSHEY_PLAIN,1.5,(20,185,20), 2)
            # print the confidence level with the person's name to standard output
            confidence = (prediction[1]) if prediction[1] <= 100.0 else 100.0
            print("predicted person: {}, confidence: {}%".format(names[prediction[0]].strip(), round((confidence / 74.5) * 100, 2)))

            with open("servo.py") as f:
                code = f.read()
                exec(code)
                sys.exit()  

        # if face is unknown (if classifier is not trained on this face), show 'Unknown' text...
        else:
            cv2.putText(im,'Unknown',(x + 5, (y + 25) + h), cv2.FONT_HERSHEY_PLAIN,1.5,(65,65, 255), 2)
            print("predicted person: Unknown")

    # show window and set the window title
    rgb_image = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
    pygame_image = pygame.image.frombuffer(rgb_image.tobytes(), rgb_image.shape[1::-1], 'RGB')
    #screen = pygame.display.set_mode((rgb_image.shape[1], rgb_image.shape[0]))
    screen.blit(pygame_image, (0, 0))
    pygame.display.flip()
    cv2.waitKey(10)  # brief pause between frames
    # quit when the piTFT button on GPIO 27 is pressed
    if not GPIO.input(27):
        break

webcam.release()
pygame.quit()
GPIO.cleanup()




# Admin Program
			
import os
import cv2, numpy, sys, os, time
import shutil
from pygame.locals import *
import pygame,pigame
import RPi.GPIO as GPIO

os.putenv('SDL_VIDEODRIVER', 'fbcon') # Display on piTFT
os.putenv('SDL_FBDEV', '/dev/fb1')
os.putenv('SDL_MOUSEDRV','dummy')
os.putenv('SDL_MOUSEDEV', '/dev/null')
os.putenv('DISPLAY','')
pygame.init()
pitft= pigame.PiTft()
haar_file = '/home/pi/Project/facialversion1/haarcascade_frontalface_default.xml'
datasets = '/home/pi/Project/facialversion1/Data'
# Basic configuration for the game
size = (width, height) = (320, 240)
screen = pygame.display.set_mode(size)
FPS = 40
clock = pygame.time.Clock()
WHITE = 255,255,255
BLACK = 0,0,0
RED = 255,0,0
GREEN = 0,255,0
GPIO.setmode(GPIO.BCM)
GPIO.setup(27,GPIO.IN)
GPIO.setup(17,GPIO.IN)
button_positions = [(35, 40), (175, 40), (35, 140), (175, 140)]
Mbutton= {"Main Menu": (280, 0)}
buttons = {
    "Add User": (35, 40),
    "Show Users": (175, 40),
    "Delete User": (35, 140),
    "Exit": (175, 140)
}
def draw_M():
    for text, pos in Mbutton.items():
        pygame.draw.rect(screen, GREEN, (pos[0], pos[1], 60, 30))
        label = font.render(text, 1, GREEN)
        label_rect = label.get_rect(center=(pos[0] -20, pos[1] - 20))
        screen.blit(label, label_rect)
def handle_M(touch_pos):
    for button_text, button_pos in Mbutton.items():
        button_rect = pygame.Rect(button_pos[0], button_pos[1], 60, 30)
        if button_rect.collidepoint(touch_pos):
            print(f"Button {button_text} pressed")
            if button_text == "Main Menu":
                return True  # Return True when "Main Menu" button is pressed
    return False  # Return False otherwise
def draw_buttons(buttons):
    for user, pos in buttons.items():
        pygame.draw.rect(screen, WHITE, (pos[0], pos[1], 120, 60))
        label = font.render(user, True, GREEN if user != "Empty" else RED)
        label_rect = label.get_rect(center=(pos[0] + 60, pos[1] + 30))
        screen.blit(label, label_rect)
def display_first_pic(user):
    screen.fill(WHITE)
    user_path = os.path.join(datasets, user)
    image_files = sorted([os.path.join(user_path, file) for file in os.listdir(user_path) if file.endswith('.png')])
    if image_files:
        first_image_file = image_files[0]
        image = pygame.image.load(first_image_file)
        screen.blit(image, (0, 0))
        draw_M()  # Draw the "Main Menu" button
        pygame.display.flip()
        running = True
        while running:
            pitft.update()
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    running = False
                    pygame.quit()
                    sys.exit()
                elif event.type == pygame.MOUSEBUTTONUP:  # Detect touch or click release
                    touch_pos = pygame.mouse.get_pos()
                    if handle_M(touch_pos):  # Handle the touch event for "Main Menu" button
                        return  # Return control back to the show_all_users function

def draw_game(buttons):
    screen.fill(BLACK)
    draw_buttons(buttons)
    pygame.display.flip()

def get_user_buttons():
    users = os.listdir(datasets)[:4]  
    while len(users) < 4:
        users.append("Empty")  # Fill remaining slots if fewer than four users
    buttons = {user: button_positions[i] for i, user in enumerate(users)}
    return buttons


def show_all_users():
    screen.fill(BLACK)
    draw_M()
    users = get_user_buttons()
    draw_buttons(users)
    pygame.display.flip()
    running = True
    while running:
        pitft.update()
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
                pygame.quit()
                sys.exit()
            elif event.type == pygame.MOUSEBUTTONUP:  # Detect touch or click release
                touch_pos = pygame.mouse.get_pos()
                if handle_M(touch_pos):  # Handle the touch event for "Main Menu" button
                    return  # Return control back to the main event loop
                for user, pos in users.items():
                    button_rect = pygame.Rect(pos[0], pos[1], 120, 60)
                    if button_rect.collidepoint(touch_pos) and user != "Empty":
                        display_first_pic(user)
                        show_all_users()
                        break  # Display the first image from the user's dataset
        pygame.display.flip()

def add_user():
    predefined_names = {"Wenjun": (35, 40), "Kuang Sheng": (175, 40), "TEST": (35, 140), "Demo": (175, 140)}
    screen.fill(BLACK)
    draw_buttons(predefined_names)
    draw_M()
    pygame.display.flip()
    running = True
    while running:
        pitft.update()
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
                pygame.quit()
                sys.exit()

            elif event.type == pygame.MOUSEBUTTONUP:  # Detect touch or click release
                touch_pos = pygame.mouse.get_pos()
                if handle_M(touch_pos):  # Handle the touch event for "Main Menu" button
                    return
                for name,(button_x,button_y) in predefined_names.items():
                    button_rect = pygame.Rect(button_x, button_y, 120, 60)
                    if button_rect.collidepoint(touch_pos):
                        user_name = name
                        running = False
                        break
    pygame.display.update()
    path = os.path.join(datasets, user_name)
    if not os.path.isdir(path):
        os.mkdir(path)
    (width, height) = (130, 100)
    face_cascade = cv2.CascadeClassifier(haar_file)
    webcam = cv2.VideoCapture(0)
    time.sleep(2)
    count = 1
    print("Taking pictures")
    while count < 201:
        ret_val, im = webcam.read()
        if ret_val == True:
            im= cv2.flip(im,-1)
            gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, 1.3, 4)
            for (x,y,w,h) in faces:
                cv2.rectangle(im,(x,y),(x+w,y+h),(0,255,0),2)
                face = gray[y:y + h, x:x + w]
                face_resize = cv2.resize(face, (width, height))
                cv2.imwrite('%s/%s.png' % (path,count), face_resize)
            count += 1
            rgb_image = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
            pygame_image = pygame.image.frombuffer(rgb_image.tobytes(), rgb_image.shape[1::-1], "RGB")
            screen.blit(pygame_image,(0,0))
            pygame.display.flip()
            if not GPIO.input(27):
                admin()
    print("User Added!")
    webcam.release()
    cv2.destroyAllWindows()
 # Update the whole screen once per frame

def delete_user():
    screen.fill(BLACK)
    draw_M()  # Draw the "Main Menu" button
    users_buttons = get_user_buttons()
    draw_buttons(users_buttons)

    running = True
    while running:
        pitft.update()  # Update the touch or mouse position
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
                pygame.quit()
                sys.exit()

            elif event.type == pygame.MOUSEBUTTONUP:  # Detect touch or click release
                touch_pos = pygame.mouse.get_pos()
                if handle_M(touch_pos):  # Handle the touch event for "Main Menu" button
                    return  # Return control back to the main event loop
                # Existing functionality for deleting a user
                for user, pos in users_buttons.items():
                    button_rect = pygame.Rect(pos[0], pos[1], 120, 60)
                    if button_rect.collidepoint(touch_pos) and user != "Empty":
                        user_path = os.path.join(datasets, user)
                        try:
                            shutil.rmtree(user_path)
                            print(f"Deleted user: {user}")
                            screen.fill((0, 0, 0))  # Clear the screen
                            draw_M()  # Redraw the "Main Menu" button
                            users_buttons = get_user_buttons()  # Update the user buttons
                            draw_buttons(users_buttons)  # Redraw the user buttons
                        except OSError as e:
                            print(f"Error: {e.strerror}")

        pygame.display.flip()  # Update the whole screen once per frame
font = pygame.font.Font(None, 20)
def exit():
    pygame.quit()
    sys.exit()
def check_buttons(touch_pos):
  x,y = touch_pos
  if (x>35 and y>40) and (x<155 and y<100):
    add_user()
  if (x>175 and y>40) and (x<295 and y<100):
    show_all_users()
  if (x>35 and y>140) and (x<155 and y<200):
    delete_user()
  if (x>175 and y>140) and (x<295 and y<200):
    exit()

def handle_touch(pos):
    """Handle touch screen presses."""
    for button_text, button_pos in buttons.items():
        button_rect = pygame.Rect(button_pos[0], button_pos[1], 120, 60)
        if button_rect.collidepoint(pos):
            print(f"Button {button_text} pressed")
            if button_text == "Add User":
                add_user()
            elif button_text == "Show Users":
                show_all_users()
            elif button_text == "Delete User":
                delete_user()
            elif button_text == "Exit":
                exit()    

touch_pos= (0,0)
pygame.display.flip()
def admin():
    running = True
    screen.fill(BLACK)
    draw_buttons(buttons)  # Initial draw of buttons
    while running:
        pitft.update()
        #screen.fill(BLACK)
        draw_buttons(buttons)
        pygame.display.flip()
        touch_pos= None
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

            elif event.type == pygame.MOUSEBUTTONUP:
                touch_pos = pygame.mouse.get_pos()
                print(touch_pos)
                handle_touch(touch_pos)
                screen.fill(BLACK)  # Handles all touch logic
        


if __name__ == '__main__':
    admin()




"""
@file servo.py
@description RPi as an MQTT Brokder
"""
import pygame
import os
import time
from pygame.locals import *
import paho.mqtt.client as mqtt
import RPi.GPIO as GPIO

# MQTT Broker
MQTT_BROKER = 'localhost'
MQTT_PORT = 1883
MQTT_TOPIC_TEMP = 'home/temperature'
MQTT_TOPIC_HUM = 'home/humidity'
MQTT_TOPIC_CO2 = 'home/CO2'
MQTT_TOPIC_TVOC = 'home/TVOC'
MQTT_TOPIC_SERVO = 'home/servo'

# Variables to store message data
temperature = ""
humidity = ""
co2_level = ""
tvoc_level = ""
welcome_msg = "Welcome worker!"
welcome_msg2 = "Your office environment:"

# GPIO setup for servos
GPIO.setmode(GPIO.BCM)
# Note: an earlier issue here caused the piTFT to go completely black
GPIO.setup(24, GPIO.OUT)  # Servo A
GPIO.setup(15, GPIO.OUT)  # Servo B
pwmA = GPIO.PWM(24, 50)  # 50 Hz (common for servos)
pwmB = GPIO.PWM(15, 50)
pwmA.start(0)
pwmB.start(0)

# Initial angle for both servos
angleA = 90
angleB = 90

def test1():
    global angleA
    if angleA > 10: angleA -= 10
    set_angle(pwmA, angleA)

def test2():
    global angleA
    if angleA < 170: angleA += 10
    set_angle(pwmA, angleA)
    
def test3():
    global angleB
    if angleB > 10: angleB -= 10
    set_angle(pwmB, angleB)


def test4():
    global angleB
    if angleB < 170: angleB += 10
    set_angle(pwmB, angleB)




def set_angle(pwm, angle):
    #print("receive com")
    duty = angle / 18 + 2
    pwm.ChangeDutyCycle(duty)
    time.sleep(0.1)
    pwm.ChangeDutyCycle(0)

def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))
    client.subscribe(MQTT_TOPIC_TEMP)
    client.subscribe(MQTT_TOPIC_HUM)
    client.subscribe(MQTT_TOPIC_CO2)
    client.subscribe(MQTT_TOPIC_TVOC)
    client.subscribe(MQTT_TOPIC_SERVO)  #CREATE ISSUE

def on_message(client, userdata, msg):
    global temperature, humidity, co2_level, tvoc_level
    if msg.topic == MQTT_TOPIC_TEMP:
        temperature = str(msg.payload.decode("utf-8"))
    elif msg.topic == MQTT_TOPIC_HUM:
        humidity = str(msg.payload.decode("utf-8"))
    elif msg.topic == MQTT_TOPIC_CO2:
        co2_level = str(msg.payload.decode("utf-8"))
    elif msg.topic == MQTT_TOPIC_TVOC:
        tvoc_level = str(msg.payload.decode("utf-8"))
    elif msg.topic == MQTT_TOPIC_SERVO:
        
        command = str(msg.payload.decode("utf-8"))
        if command == "pl":
            test3()
            #print("receive com")
            #set_angle(pwmA, 10)  
        elif command == "pr":
            test4()
            #set_angle(pwmA, 170)  
        elif command == "tu":
            test1()
            #set_angle(pwmB, 170)  
        elif command == "td":
            test2()
            #set_angle(pwmB, 10)  
    print(f"Received message: {msg.topic} {str(msg.payload)}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1)
client.on_connect = on_connect
client.on_message = on_message

client.connect(MQTT_BROKER, MQTT_PORT, 60)
client.loop_start()

# Initialize pygame

os.putenv('SDL_VIDEODRIVER', 'fbcon')
os.putenv('SDL_FBDEV', '/dev/fb1')
pygame.init()
pygame.mouse.set_visible(True)

WHITE = 255, 255, 255
BLACK = 0, 0, 0

screen = pygame.display.set_mode((320, 240))
screen.fill(BLACK)
pygame.display.update()

font = pygame.font.Font(None, 30)

quit_prog = False
time_limit = 200
start_time = time.time()


#Set up a physical 'bail out' button: Button 22, the third button on piTFT
#GPIO.setmode(GPIO.BCM)
GPIO.setup(22,GPIO.IN,pull_up_down=GPIO.PUD_UP)

#Callback function for bail out button
def GPIO22_callback(channel):
    global quit_prog
    quit_prog = True
    print("Button 22 has been pressed and quit!")   

GPIO.add_event_detect(22, GPIO.FALLING, callback=GPIO22_callback, bouncetime=300)


border_rect = pygame.Rect(25, 75, 220, 120)

# Main Loop
while True:
    if quit_prog:
        pygame.quit()
        break

    now = time.time()
    elapsed_time = now - start_time
    if elapsed_time > time_limit:
        pygame.quit()
        break

    screen.fill(BLACK)

    # Display MQTT data
    temp_text = font.render(f"{temperature}", True, WHITE)
    hum_text = font.render(f"{humidity}", True, WHITE)
    co2_text = font.render(f"{co2_level}", True, WHITE)
    tvoc_text = font.render(f"{tvoc_level}", True, WHITE)

    welcome_text = font.render(f"{welcome_msg}", True, WHITE)
    welcome_text2 = font.render(f"{welcome_msg2}", True, WHITE)

    screen.blit(welcome_text, (10, 20))  
    screen.blit(welcome_text2, (10, 50))
    screen.blit(temp_text, (30, 80))
    screen.blit(hum_text, (30, 110))
    screen.blit(co2_text, (30, 140))
    screen.blit(tvoc_text, (30, 170))

    pygame.draw.rect(screen, WHITE, border_rect, 1)  # '1' is the border thickness

    # Scan touchscreen events
    for event in pygame.event.get():
        if event.type == MOUSEBUTTONDOWN:
            x, y = pygame.mouse.get_pos()
        elif event.type == MOUSEBUTTONUP:
            x, y = pygame.mouse.get_pos()
            print(x, y)

    pygame.display.flip()
    time.sleep(0.1)




"""
@file esp32.py
@description The code is for ESP32 Feather board, should be run under the Arduino IDE environment.
Settings should be configured to adapt to different network conditions.
"""
#include <WiFi.h>
#include <PubSubClient.h>
#include "Adafruit_SGP30.h"
#include "DHT.h"
#include "BluetoothSerial.h"

#define DHTPIN 26     // Digital pin connected to the DHT sensor
#define DHTTYPE DHT11   // DHT 11

// Bluetooth Serial object
BluetoothSerial SerialBT;

// WiFi SSID and password
//Note that the devices should be registered for Cornell WiFi 'RedRover'
//Leave the password blank, only register the Mac address here: https://it.cornell.edu/wifi/register-device-doesnt-have-browser
const char* ssid = "RedRover";
const char* password = "";

// GPIO for LED and Fan
const int ledPin =  25;
const int fanPin = 21;

// MQTT Broker -> Should be the IP address of the RPI
const char* mqtt_server = "10.49.75.205"; //"10.49.242.114";
const int mqtt_port = 1883; //Default port number is 1883

WiFiClient espClient;
PubSubClient client(espClient);

long lastMsg = 0;  
char msg[50];
int value = 0;

//Bluetooth
String message = "";
char incomingChar;

//set up DHT11
DHT dht(DHTPIN, DHTTYPE);
//set up SGP30
Adafruit_SGP30 sgp;

//Set up Wi-Fi
void setup_wifi() {
    delay(10);
    Serial.println();
    Serial.print("Connecting to ");
    Serial.println(ssid);

    WiFi.begin(ssid, password);

    while (WiFi.status() != WL_CONNECTED) {
        delay(500);
        Serial.print(".");
    }
    Serial.println("");
    Serial.println("WiFi connected");
    Serial.println("IP address: ");
    Serial.println(WiFi.localIP());
}

//Reconnect to the MQTT broker if the connection drops
void reconnect() {
    
    while (!client.connected()) {
        Serial.print("Attempting MQTT connection...");
        // Attempt to reconnect
        if (client.connect("ESP8266Client")) {
            Serial.println("connected");
        } else {
            Serial.print("failed, rc=");
            Serial.print(client.state());
            Serial.println(" try again in 5 seconds");
            //Retry after 5 sec
            delay(5000);
        }
    }
}


void setup() {
    Serial.begin(115200);
    
    pinMode(ledPin, OUTPUT);
    pinMode(fanPin, OUTPUT);
    
    //WiFi set up
    setup_wifi();
    
    SerialBT.begin("ESP32");
    Serial.println("The device started, now you can pair it with bluetooth!");
    
    //SGP30 Sensor set up
    if (!sgp.begin()){
      Serial.println("Sensor not found :(");
      while (1);
    }
    Serial.print("Found SGP30 serial #");
    Serial.print(sgp.serialnumber[0], HEX);
    Serial.print(sgp.serialnumber[1], HEX);
    Serial.println(sgp.serialnumber[2], HEX);
    
    //DHT11 Sensor set up
    dht.begin();

    //MQTT set up
    client.setServer(mqtt_server, mqtt_port);
}

//Main loop
void loop() {
    if (!client.connected()) {
        reconnect();
    }
    client.loop();

    // Read incoming Bluetooth characters and assemble the command string
    if (SerialBT.available()){
      char incomingChar = SerialBT.read();
      if (incomingChar != '\n'){
        message += String(incomingChar);
      }
      else{
        message = "";
      }
      Serial.write(incomingChar);  
    }
    // Check received message and control output accordingly

    if (message == "led") {
      // Toggle the LED state
      int currentState = digitalRead(ledPin);
      digitalWrite(ledPin, !currentState);
    } else if (message =="fan_on"){
   //if (message =="led_on"){
   //   digitalWrite(ledPin, HIGH);
   // } else if (message =="led_off"){
    //  digitalWrite(ledPin, LOW);
    //} else if (message =="fan_on"){
      digitalWrite(fanPin, HIGH);
    } else if (message =="fan_off"){
      digitalWrite(fanPin, LOW);
    } else if (message =="pl"){
      client.publish("home/servo", "pl");
    } else if (message =="pr"){
      client.publish("home/servo", "pr");
    } else if (message =="tu"){
      client.publish("home/servo", "tu");
    } else if (message =="td"){
      client.publish("home/servo", "td");
    }


    long now = millis();
    if (now - lastMsg > 1000) { //Read and send every 1 sec (use 10000 for every 10 sec)
        lastMsg = now;
        //digitalWrite(ledPin, HIGH);

        // Read sensor data from DTH11
        float h = dht.readHumidity();
        float t = dht.readTemperature(); // In Celsius
        // float f = dht.readTemperature(true); // Uncomment if we need Fahrenheit scale

        // Read sensor data and publish
        if (isnan(h) || isnan(t)) {
            Serial.println(F("Failed to read from DHT sensor!"));
        } else if (!sgp.IAQmeasure()) {
            Serial.println("Measurement failed");
        } else {
            // Collect sensor data and save in string
            char temperatureMsg[50];
            snprintf(temperatureMsg, sizeof(temperatureMsg), "Temperature: %.2f C", t);
            char humidityMsg[50];
            snprintf(humidityMsg, sizeof(humidityMsg), "Humidity: %.2f %%", h);

            char co2Msg[20];
            snprintf(co2Msg, sizeof(co2Msg), "eCO2: %d ppm", sgp.eCO2);
            char tvocMsg[20];
            snprintf(tvocMsg, sizeof(tvocMsg), "TVOC: %d ppb", sgp.TVOC);

            // Publish the sensor data to these topics
            client.publish("home/temperature", temperatureMsg);
            client.publish("home/humidity", humidityMsg);
            client.publish("home/CO2", co2Msg);
            client.publish("home/TVOC", tvocMsg);

    
        }
    }

    //delay(2000);
}